9 research outputs found

    Distinguishing Stationary/Nonstationary Scaling Processes Using Wavelet Tsallis q-Entropies

    Classifying processes as stationary or nonstationary has been recognized as an important and unresolved problem in the analysis of scaling signals. Stationarity or nonstationarity determines not only the form of autocorrelations and moments but also the choice of estimators. In this paper, a methodology for classifying scaling processes as stationary or nonstationary is proposed. The method is based on wavelet Tsallis q-entropies and, in particular, on the behaviour of these entropies for scaling signals. It is demonstrated that the observed wavelet Tsallis q-entropies of 1/f signals can be modeled by sum-cosh apodizing functions, which assign constant entropies to one set of scaling signals and varying entropies to the rest, and that this assignment is controlled by q. The proposed methodology therefore differentiates stationary signals from nonstationary ones on the basis of the observed wavelet Tsallis entropies of 1/f signals. Experimental studies on synthesized signals confirm that the proposed method not only achieves satisfactory classification but also outperforms current methods proposed in the literature.
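
    A minimal sketch of the underlying quantity, assuming PyWavelets, a db4 wavelet and q = 2 as illustrative choices (not necessarily the authors' settings): the relative wavelet energies per scale are treated as a probability distribution and their Tsallis q-entropy is computed.

    import numpy as np
    import pywt

    def wavelet_tsallis_q_entropy(x, q=2.0, wavelet="db4", levels=8):
        # Multilevel DWT: coeffs[0] is the approximation, coeffs[1:] the detail scales.
        coeffs = pywt.wavedec(x, wavelet, level=levels)
        # Relative wavelet energy per scale, a probability-like distribution p_j.
        energies = np.array([np.sum(c ** 2) for c in coeffs[1:]])
        p = energies / energies.sum()
        # Tsallis q-entropy of the relative wavelet energies.
        return (1.0 - np.sum(p ** q)) / (q - 1.0)

    # Illustrative use: white noise (stationary) vs. its cumulative sum,
    # a random-walk-like nonstationary signal.
    rng = np.random.default_rng(0)
    white = rng.standard_normal(4096)
    print(wavelet_tsallis_q_entropy(white))
    print(wavelet_tsallis_q_entropy(np.cumsum(white)))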

    Wavelet Fisher’s Information Measure of 1/fα Signals

    This article defines the concept of the wavelet-based Fisher’s information measure (wavelet FIM) and develops a closed-form expression of this measure for 1/fα signals. The wavelet Fisher’s information measure characterizes the complexities associated with 1/fα signals and provides a powerful tool for their analysis. Theoretical and experimental studies demonstrate that this quantity increases exponentially for α > 1 (nonstationary signals) and is almost constant for α < 1 (stationary signals). Potential applications of the wavelet FIM are discussed in some detail, and its power and robustness for detecting structural breaks in the mean embedded in stationary fractional Gaussian noise signals are studied.
    Funders: Consejo Nacional de Ciencia y Tecnología; FOMIX-COQCY
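
    A minimal sketch, assuming PyWavelets and a standard finite-difference discretization of Fisher information over the relative wavelet energies, sum_j (p_{j+1} - p_j)^2 / p_j; the wavelet, level count and exact discretization are illustrative assumptions rather than the paper's closed-form result.

    import numpy as np
    import pywt

    def wavelet_fisher_information(x, wavelet="db4", levels=8, eps=1e-12):
        coeffs = pywt.wavedec(x, wavelet, level=levels)
        energies = np.array([np.sum(c ** 2) for c in coeffs[1:]])
        p = energies / energies.sum()
        # Discrete Fisher information of the scale distribution p_1..p_M:
        # large when energy changes sharply across scales (strongly correlated,
        # nonstationary signals), small when it is spread evenly.
        return np.sum(np.diff(p) ** 2 / (p[:-1] + eps))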

    Wavelet q-Fisher Information for Scaling Signal Analysis

    This article first introduces the concept of wavelet q-Fisher information and then derives a closed-form expression of this quantifier for scaling signals of parameter α. It is shown that this information measure appropriately describes the complexities of scaling signals and provides further analysis flexibility through the parameter q. In the limit q → 1, wavelet q-Fisher information reduces to the standard wavelet Fisher information, and for q > 2 it reverses its behavior. Experimental results on synthesized fGn signals validate the level-shift detection capabilities of wavelet q-Fisher information. A comparative study also shows that wavelet q-Fisher information locates structural changes in correlated and anti-correlated fGn signals in a way comparable with standard breakpoint-location techniques but at a fraction of the time. Finally, the application of this quantifier to H.263-encoded video signals is presented.
    Funders: Consejo Nacional de Ciencia y Tecnología; FOMIX-COQCY
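
    A minimal sketch of window-by-window level-shift detection, using the standard wavelet Fisher information defined in the sketch above as a stand-in for its q-parameterized generalization (the q → 1 limit); the window length, step and threshold are illustrative assumptions.

    import numpy as np

    def detect_level_shifts(x, window=1024, step=256, thresh_factor=3.0):
        # Reuses wavelet_fisher_information() from the previous sketch.
        scores, starts = [], []
        for start in range(0, len(x) - window + 1, step):
            scores.append(wavelet_fisher_information(x[start:start + window], levels=6))
            starts.append(start)
        scores = np.asarray(scores)
        # Flag windows whose information measure spikes well above the typical level.
        flags = scores > thresh_factor * np.median(scores)
        return [s for s, f in zip(starts, flags) if f]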

    TomoSAR Imaging Using Statistical Regularization on Polarimetric SAR Observations

    Aimed at estimating the position of the vertical structures that scatter the field back towards the active sensor, synthetic aperture radar (SAR) tomography (TomoSAR) focusing techniques (e.g., beamforming, Capon and multiple signal classification) determine how energy is distributed over space. Their polarimetric configurations, furthermore, seek optimal polarization combinations in order to recover the polarimetric pseudo-power and to extract the associated scattering mechanisms and reflector heights, identified through the highest local maxima. Seeking to attain finer resolution, this article analyses the use of statistical regularization on polarimetric SAR observations for TomoSAR imaging. The retrievals of conventional polarimetric TomoSAR focusing methods are refined in a post-processing step based on weighted covariance fitting. The main goal of this article is to introduce the corresponding algorithm and to point out, via simulations, the main advantages and drawbacks of the proposed strategy.
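
    A minimal sketch of the conventional focusing step for a single range-azimuth cell, assuming an illustrative acquisition geometry and a simulated two-scatterer covariance matrix (none of the numbers come from the paper): beamforming and Capon distribute the energy over elevation, and reflector heights are read off the local maxima.

    import numpy as np

    wavelength, slant_range, look_angle = 0.24, 5000.0, 0.6   # assumed geometry
    baselines = np.linspace(-60.0, 60.0, 9)                   # perpendicular baselines [m]
    kz = 4 * np.pi * baselines / (wavelength * slant_range * np.sin(look_angle))

    def steering(z):
        # Elevation steering vector for height z above the reference plane.
        return np.exp(1j * kz * z)

    # Simulated covariance: ground response at 0 m, canopy at 15 m, plus noise.
    R = (np.outer(steering(0.0), steering(0.0).conj())
         + 0.5 * np.outer(steering(15.0), steering(15.0).conj())
         + 0.1 * np.eye(len(kz)))
    R_inv = np.linalg.inv(R)

    z_axis = np.linspace(-10.0, 30.0, 200)
    p_beamforming = [np.real(steering(z).conj() @ R @ steering(z)) / len(kz) ** 2 for z in z_axis]
    p_capon = [1.0 / np.real(steering(z).conj() @ R_inv @ steering(z)) for z in z_axis]
    # Local maxima of either profile indicate reflector heights; Capon is sharper.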

    ESTIMATION OF STRUCTURED COVARIANCE MATRICES FOR TOMOSAR FOCUSING

    The most common focusing techniques for Synthetic Aperture Radar (SAR) Tomography (TomoSAR), e.g., Matched Spatial Filtering and Capon, make use of the conventional sample covariance matrix, obtained from a finite number of observations. Yet structured covariance matrix estimates can be employed in lieu of the sample covariance matrix. Accordingly, our simulation study shows that Capon's performance improves with the use of structured covariance matrices. These are obtained with the Subspace Fitting approach, properly adapted to TomoSAR. Numerical comparisons between structured and unstructured covariance matrices are presented.
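
    A minimal sketch of the idea of replacing the sample covariance matrix by a structured, model-based estimate before focusing. Here the structure R ≈ A diag(p) A^H + noise is fitted by non-negative least squares over an elevation grid, which is a simple covariance-fitting stand-in for the Subspace Fitting estimator used in the paper; steering_matrix and the noise floor are assumed inputs.

    import numpy as np
    from scipy.optimize import nnls

    def sample_covariance(looks):
        # looks: complex array of shape (n_looks, n_channels).
        return looks.conj().T @ looks / looks.shape[0]

    def structured_covariance(R_hat, steering_matrix, noise_floor=1e-3):
        # Fit R_hat ≈ A diag(p) A^H with p >= 0, stacking real/imag parts for NNLS.
        n = R_hat.shape[0]
        atoms = np.stack([np.outer(a, a.conj()).ravel() for a in steering_matrix.T], axis=1)
        A_real = np.vstack([atoms.real, atoms.imag])
        b_real = np.concatenate([R_hat.ravel().real, R_hat.ravel().imag])
        p, _ = nnls(A_real, b_real)
        # Rebuild the covariance from the fitted power profile.
        R_fit = (steering_matrix * p) @ steering_matrix.conj().T
        return R_fit + noise_floor * np.eye(n)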

    The Beltrami SAR Framework for Multi-channel Despeckling

    In this paper, a new framework for iterative speckle noise reduction in polarimetric synthetic aperture radar (SAR) data is introduced. Speckle is inherent to all coherent imaging systems and affects SAR imagery in the form of strong intensity variations in pixels with similar backscattering coefficients. This makes the interpretation of SAR data in several applications a difficult task. The proposed framework includes a preprocessing step capable of dealing with the noise correlation usually found in single-look data. The general filtering approach is based on the Beltrami flow for denoising manifolds or images painted on manifolds. The principal contribution of this work is to adapt this approach to deal with covariance or coherency matrices instead of optical imagery. The evaluation presented suggests that this approach allows for good spatial and radiometric preservation compared with other state-of-the-art methods. Experiments are performed on the basis of synthetic and real-world experimental data. The validation of the proposed framework is accomplished using two refined error performance measures and the well-known effective number of looks measure. The source code of a parallel implementation of the proposed framework is released under the MPL 2.0 (https://www.mozilla.org/en-US/MPL/2.0/) alongside this paper.
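
    A minimal sketch of a single-channel Beltrami-flow denoising step, as a scalar simplification of the manifold-based smoothing idea; the paper's framework applies the flow to polarimetric covariance/coherency matrices, whereas this illustrative version (beta, the time step and the iteration count are all assumed) acts on a log-intensity image.

    import numpy as np

    def beltrami_step(img, beta=1.0, dt=0.1):
        # Induced metric of the image graph (x, y, beta * I).
        Ix, Iy = np.gradient(img)
        g11 = 1.0 + beta ** 2 * Ix ** 2
        g22 = 1.0 + beta ** 2 * Iy ** 2
        g12 = beta ** 2 * Ix * Iy
        sqrt_g = np.sqrt(g11 * g22 - g12 ** 2)
        # Laplace-Beltrami operator: divergence of the metric-weighted gradient.
        flux_x = (g22 * Ix - g12 * Iy) / sqrt_g
        flux_y = (g11 * Iy - g12 * Ix) / sqrt_g
        div = np.gradient(flux_x, axis=0) + np.gradient(flux_y, axis=1)
        return img + dt * div / sqrt_g

    def beltrami_despeckle(intensity, iters=50):
        # Speckle is roughly multiplicative, so smooth in the log domain.
        log_img = np.log(np.maximum(intensity, 1e-6))
        for _ in range(iters):
            log_img = beltrami_step(log_img)
        return np.exp(log_img)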

    REGULARIZATION PARAMETER SELECTION VIA L-CURVE AND Θ-CURVE APPROACHES TOWARDS TOMOSAR IMAGING

    Nonlinear inverse problems like Synthetic Aperture Radar Tomography are often ill-posed, since their solutions are very sensitive to small perturbations in the input data and are therefore difficult to compute numerically. Ill-posed problems are commonly tackled with regularization approaches; however, a crucial problem then arises: the selection of the regularization parameters. In the search for optimal values of such regularization parameters, this article addresses an extension of the L-curve method, called the Θ-curve. Furthermore, aimed at reducing convergence time, the k-criterion is added, based on the first and second derivatives of the L-curve.
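
    A minimal sketch of the classical L-curve corner (maximum-curvature) rule on a generic Tikhonov problem min ||Ax - b||^2 + lam^2 ||x||^2, which is the baseline the article extends with the Θ-curve and the k-criterion; the test matrix and the lambda grid are assumed to be supplied by the caller.

    import numpy as np

    def l_curve_corner(A, b, lambdas):
        lambdas = np.asarray(lambdas, dtype=float)
        res_norm, sol_norm = [], []
        for lam in lambdas:
            x = np.linalg.solve(A.T @ A + lam ** 2 * np.eye(A.shape[1]), A.T @ b)
            res_norm.append(np.linalg.norm(A @ x - b))
            sol_norm.append(np.linalg.norm(x))
        # L-curve in log-log coordinates; its corner is the point of maximum curvature.
        xi, eta, t = np.log(res_norm), np.log(sol_norm), np.log(lambdas)
        dx, dy = np.gradient(xi, t), np.gradient(eta, t)
        ddx, ddy = np.gradient(dx, t), np.gradient(dy, t)
        curvature = (dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5
        return lambdas[np.argmax(curvature)]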

    REGULARIZATION PARAMETER SELECTION FOR TOMOSAR IMAGING WITH SINGLE AND DUAL POLARIMETRIC OBSERVATIONS

    Polarimetric focusing techniques for synthetic aperture radar (SAR) tomography (TomoSAR) seek optimal polarization combinations to extract the associated scattering mechanisms and reflector heights. Regularization approaches like weighted covariance fitting (WCF), implemented in an iterative manner, attain finer resolution than conventional focusing techniques (e.g., Capon). Such approaches normally entail the selection of a regularization parameter and a first estimate of the power spectrum pattern. Regularization parameter selection via the L-curve method requires the scattering vector; however, it may not always be available, especially when not working at full resolution. Manipulations previously applied to the data covariance matrix (e.g., pre-summing) must be mirrored in the scattering vector, which may not always be feasible. Accordingly, this article suggests modifying the L-curve method to work exclusively with data covariance matrices. The proposed strategy is applied to WCF, considering single and dual channels. Iterations are stopped based on the Akaike information criterion.
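
    A minimal, hypothetical sketch of tracing L-curve points directly from a data covariance matrix when no scattering vector is available: the covariance-fitting misfit is traded off against the norm of a regularized power profile over a grid of regularization parameters. This simplified construction and its inputs (steering_matrix, the lambda grid) are assumptions for illustration, not the paper's exact WCF formulation.

    import numpy as np

    def covariance_l_curve_points(R_hat, steering_matrix, lambdas):
        A = steering_matrix                      # columns a(z_k) over the elevation grid
        G = np.abs(A.conj().T @ A) ** 2          # beam-coupling matrix |a_k^H a_l|^2
        b = np.real(np.einsum("ik,ij,jk->k", A.conj(), R_hat, A))  # a_k^H R a_k per grid point
        points = []
        for lam in lambdas:
            # Regularized, non-negativity-clipped power profile over the grid.
            p = np.linalg.solve(G + lam * np.eye(G.shape[0]), b)
            p = np.clip(p, 0.0, None)
            misfit = np.linalg.norm(R_hat - (A * p) @ A.conj().T)
            points.append((misfit, np.linalg.norm(p), lam))
        # Feed (log misfit, log norm) pairs to a corner-finding rule such as the one above.
        return points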